Modern autonomous driving systems are characterized by modular tasks in sequential order, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide diversity of tasks to achieve higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads. These may suffer from accumulative errors or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of these tasks contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage the advantages of each module and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. Extensive ablations show that this philosophy surpasses previous state-of-the-art methods by a large margin in all aspects. The full suite of codebase and models will be made available to facilitate future research in the community.
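To make the query-communication idea more concrete, below is a minimal, hypothetical PyTorch sketch of how the queries of one task head can be passed as context to the next (track queries → motion queries → a planning query). It illustrates the philosophy of tasks exchanging information through queries over a shared BEV feature map; it is not UniAD's actual architecture, and all module names, query counts, and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class QueryHead(nn.Module):
    """One task head: learnable queries cross-attend to BEV features and to the
    queries produced by the upstream task (a toy stand-in for query passing)."""
    def __init__(self, num_queries, dim=256, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attend_bev = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attend_prev = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, bev_feats, prev_queries=None):
        b = bev_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        q = q + self.attend_bev(q, bev_feats, bev_feats)[0]        # read the scene
        if prev_queries is not None:                                # read upstream task
            q = q + self.attend_prev(q, prev_queries, prev_queries)[0]
        return q + self.ffn(q)

# toy pipeline: perception -> prediction -> planning, exchanging queries throughout
bev = torch.randn(2, 50 * 50, 256)                     # flattened BEV features (assumed)
track_q = QueryHead(num_queries=100)(bev)              # agent/track queries
motion_q = QueryHead(num_queries=100)(bev, track_q)    # motion queries see track queries
plan_q = QueryHead(num_queries=1)(bev, motion_q)       # a single planning query sees motion
```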
Out-of-distribution (OOD) detection plays a crucial role in AI safety, and handling this task poses significant challenges. Observations show that deep neural network classifiers often tend to classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence. Existing works attempt to solve the problem by explicitly imposing uncertainty on the classifier when OOD data is exposed to it during training. In this paper, we propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection task. In particular, we impose statistical independence between inlier and outlier data during training, so that inlier data reveals little information about OOD data to the deep estimator during training. Specifically, we estimate the statistical dependence between inlier and outlier data via the Hilbert-Schmidt Independence Criterion (HSIC), and penalize this metric during training. We also connect our method to a novel statistical test during inference, in line with our principled motivation. Empirical results show that our method is effective and robust for OOD detection on various benchmarks. Compared with SOTA models, our approach achieves significant improvements on the FPR95, AUROC, and AUPR metrics. Code is available: \url{https://github.com/jylins/hone}.
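For concreteness, here is a minimal PyTorch sketch of the (biased) empirical HSIC estimator that is commonly used as a differentiable dependence measure; penalizing it between a batch of inlier features and a batch of auxiliary outlier features matches the idea described above, though the exact kernels, pairing, and weighting used in the paper are not reproduced here.

```python
import torch

def rbf_kernel(x, sigma=None):
    # pairwise squared distances; median heuristic for the bandwidth if unset
    d2 = torch.cdist(x, x) ** 2
    if sigma is None:
        sigma = torch.sqrt(0.5 * d2[d2 > 0].median())
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, with centering matrix H."""
    n = x.size(0)
    k, l = rbf_kernel(x), rbf_kernel(y)
    h = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2

# usage sketch: discourage dependence between inlier and auxiliary-outlier features
inlier_feats, outlier_feats = torch.randn(64, 128), torch.randn(64, 128)
independence_penalty = hsic(inlier_feats, outlier_feats)   # added to the training loss
```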
This paper introduces Honor of Kings Arena, a reinforcement learning (RL) environment based on Honor of Kings, one of the world's most popular games. Compared with other environments studied in most previous work, ours presents new generalization challenges for competitive reinforcement learning. It is a multi-agent problem with one agent competing against its opponent, and it requires generalization ability as the agent has diverse targets to control and diverse opponents to compete against. We describe the observation, action, and reward specifications for the Honor of Kings domain and provide an open-source Python-based interface for communicating with the game engine. We provide twenty target heroes with a variety of tasks in Honor of Kings Arena and present initial baseline results for RL-based methods with feasible computational resources. Finally, we showcase the generalization challenges imposed by Honor of Kings Arena and possible remedies to these challenges. All of the software, including the environment class, is publicly available at https://github.com/tencent-ailab/hok_env. The documentation is available at https://aiarena.tencent.com/hok/doc/.
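The interaction pattern is the familiar agent-environment loop over the observation, action, and reward specifications mentioned above. The sketch below is a generic, hypothetical gym-style loop: the class, method names, and observation size are placeholders for illustration only, not the actual hok_env API (which is documented at the links above).

```python
import numpy as np

# Hypothetical stand-in environment; consult the official hok_env docs for the real API.
class DummyMobaEnv:
    def reset(self):
        return np.zeros(128, dtype=np.float32)           # placeholder observation vector
    def step(self, action):
        obs = np.random.randn(128).astype(np.float32)    # next observation
        reward, done, info = 0.1, False, {}              # shaped reward, episode flag
        return obs, reward, done, info

def rollout(env, policy, max_steps=1000):
    """Collect one episode with an arbitrary policy(obs) -> action callable."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

episode_return = rollout(DummyMobaEnv(), policy=lambda obs: 0)   # e.g., a no-op action
```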
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and drawing extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc., in a front or perspective view. As sensor configurations get increasingly complex, integrating multi-source information from different sensors and representing features in a unified view is of vital importance. BEV perception inherits several advantages, as representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations on the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described. Furthermore, we introduce a full suite of practical guidebooks for improving the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this area. We hope this report will shed light on the community and encourage more research into BEV perception. We keep an active repository to collect the latest work and provide a toolbox of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
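As one concrete instance of the view-transformation problem in (a), the sketch below performs a simple "backward" projection: every BEV cell is assumed to lie on a plane of fixed height, is projected into the image with the camera extrinsics and intrinsics, and image features are bilinearly sampled into the BEV grid. This is only one of the families of solutions the survey covers (homography/IPM-style), written with assumed tensor shapes, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def ipm_backward_projection(img_feats, cam_to_ego, intrinsics,
                            bev_extent=50.0, bev_size=128, plane_height=0.0):
    """Sample image features into a (bev_size x bev_size) grid of ground-plane points.
    img_feats: (B, C, H, W); cam_to_ego: (B, 4, 4); intrinsics: (B, 3, 3)."""
    B, C, H, W = img_feats.shape
    device = img_feats.device
    coords = torch.linspace(-bev_extent, bev_extent, bev_size, device=device)
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    ones = torch.ones_like(gx)
    pts_ego = torch.stack([gx, gy, ones * plane_height, ones], dim=-1)   # (n, n, 4)
    ego_to_cam = torch.inverse(cam_to_ego)
    pts_cam = torch.einsum("bij,hwj->bhwi", ego_to_cam, pts_ego)[..., :3]
    uvw = torch.einsum("bij,bhwj->bhwi", intrinsics, pts_cam)            # pinhole model
    depth = uvw[..., 2:3].clamp(min=1e-5)
    u, v = (uvw[..., :1] / depth).squeeze(-1), (uvw[..., 1:2] / depth).squeeze(-1)
    grid = torch.stack([u / (W - 1) * 2 - 1, v / (H - 1) * 2 - 1], dim=-1)  # to [-1, 1]
    bev = F.grid_sample(img_feats, grid, align_corners=True, padding_mode="zeros")
    in_front = (uvw[..., 2] > 0).unsqueeze(1).float()     # ignore points behind camera
    return bev * in_front                                  # (B, C, bev_size, bev_size)
```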
Production-level workflows for producing convincing 3D dynamic human faces have long relied on a variety of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations cannot provide artists with explicit control as conventional tools do. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometry with high-quality physically-based assets. For data collection, we construct a hybrid multi-view photometric capture stage, coupled with ultra-fast cameras to obtain raw 3D facial assets. We then model facial expression, geometry, and physically-based textures with separate VAEs, where we impose a global MLP-based expression mapping across the latent spaces of the respective networks to preserve the characteristics of individual attributes. We also model the delta information as wrinkle maps for the physically-based textures, achieving high-quality 4K dynamic textures. We demonstrate our approach in high-fidelity performer-specific facial capture and cross-identity facial motion retargeting. In addition, our multi-VAE-based neural asset, together with a fast adaptation scheme, can also be deployed to handle in-the-wild videos. Furthermore, we motivate the utility of our explicit facial disentanglement strategy by providing various promising physically-based editing results with high realism. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods.
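A heavily simplified sketch of the multi-VAE idea with a global latent-to-latent mapping is given below, with assumed dimensions and names purely for illustration; the actual asset representations, VAE architectures, and wrinkle-map branch described in the paper are far richer.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """A toy VAE over flattened per-frame attributes (expression / geometry / texture)."""
    def __init__(self, in_dim, z_dim=64):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)      # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, in_dim)
    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)    # reparameterization trick
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    def decode(self, z):
        return self.dec(z)

expr_vae, geom_vae, tex_vae = TinyVAE(64), TinyVAE(3 * 5000), TinyVAE(3 * 256 * 256)
# a global MLP maps the expression latent into the latents of the other attributes,
# so a single driving expression controls geometry and texture jointly
expr_to_others = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 2 * 64))

z_expr = expr_vae.encode(torch.randn(1, 64))             # latent from a driving signal
z_geom, z_tex = expr_to_others(z_expr).chunk(2, dim=-1)
geometry, texture = geom_vae.decode(z_geom), tex_vae.decode(z_tex)
```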
Localizing text instances in natural scenes is regarded as a fundamental challenge in computer vision. Nevertheless, owing to the extremely varied orientations and scales of text instances in real scenes, most conventional text detectors suffer from the sub-text problem, in which only fragments of a text instance (i.e., sub-texts) are localized. In this work, we quantitatively analyze the sub-text problem and present a simple yet effective design, the COntrastive RElation (CORE) module, to mitigate it. CORE first leverages a vanilla relation block to model the relations among all text proposals (sub-texts of multiple text instances), and further enhances relational reasoning via instance-level sub-text discrimination in a contrastive manner. In this way, it naturally learns instance-aware representations of text proposals, thereby facilitating scene text detection. We integrate the CORE module into a two-stage text detector, Mask R-CNN, and devise our text detector CORE-Text. Extensive experiments on four benchmarks demonstrate the superiority of CORE-Text. Code is available: \url{https://github.com/jylins/core-text}.
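To illustrate the instance-level sub-text discrimination, here is a minimal supervised-contrastive loss over proposal features, where sub-texts belonging to the same text instance form positive pairs; it conveys the "contrastive manner" described above but is not the exact CORE formulation, and the feature dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

def subtext_contrastive_loss(proposal_feats, instance_ids, temperature=0.1):
    """proposal_feats: (N, D) features of text proposals (sub-texts);
    instance_ids: (N,) id of the ground-truth text instance each proposal matches."""
    z = F.normalize(proposal_feats, dim=-1)
    sim = z @ z.t() / temperature                                   # cosine similarities
    same = instance_ids.unsqueeze(0) == instance_ids.unsqueeze(1)   # same-instance mask
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = same & ~eye                                               # positives exclude self
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    # average log-likelihood of positives per anchor (skip anchors with no positive)
    per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return -(per_anchor[pos.sum(1) > 0]).mean()

feats = torch.randn(8, 256)                        # 8 proposals from the relation block
ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])       # which text instance each belongs to
loss = subtext_contrastive_loss(feats, ids)
```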
As a basic component of SE(3)-equivariant deep feature learning, steerable convolution has recently demonstrated its advantages for 3D semantic analysis. The advantages, however, come with expensive computations on dense, volumetric data, which prevent its practical use for efficiently processing inherently sparse 3D data. In this paper, we propose a novel design of Sparse Steerable Convolution (SS-Conv) to address the shortcoming; SS-Conv greatly accelerates steerable convolution with sparse tensors, while strictly preserving the property of SE(3)-equivariance. Based on SS-Conv, we propose a general pipeline for precise estimation of object poses, in which a key design is a feature-steering module that takes full advantage of SE(3)-equivariance and enables efficient pose refinement. To verify our designs, we conduct thorough experiments on three tasks of 3D object semantic analysis, including instance-level 6D pose estimation, category-level 6D pose and size estimation, and category-level 6D pose tracking. Our SS-Conv-based pipeline outperforms existing methods on almost all the metrics evaluated across the three tasks. Ablation studies also show the superiority of SS-Conv over alternative convolutions in terms of both accuracy and efficiency. Our code is publicly released at https://github.com/gorilla-lab-scut/ss-conv.
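For reference, the property that SS-Conv is said to preserve strictly can be stated in the general steerable-CNN form (not the paper's exact notation):

\[
\Phi\big[\mathcal{T}_g f\big] \;=\; \mathcal{T}'_g\big[\Phi f\big] \quad \forall\, g \in \mathrm{SE}(3),
\qquad
\kappa(R\,x) \;=\; \rho_{\mathrm{out}}(R)\,\kappa(x)\,\rho_{\mathrm{in}}(R)^{-1} \quad \forall\, R \in \mathrm{SO}(3),
\]

where $\mathcal{T}_g$ and $\mathcal{T}'_g$ denote the actions of $g$ on the input and output feature fields, and $\rho_{\mathrm{in}}, \rho_{\mathrm{out}}$ are the rotation representations attached to the input and output feature types. The second condition is the standard kernel constraint that makes a convolution (dense or sparse) steerable, so rotating and translating the input 3D data rotates the output features accordingly rather than scrambling them.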
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving-policy representations by predicting future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
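As a reference for the self-supervised objective, the photometric error is typically an SSIM-plus-L1 comparison between the current frame and a neighboring frame warped into it using the predicted depth and ego-motion. The sketch below shows this standard monocular photometric loss; it is a common formulation, not necessarily PPGeo's exact weighting, and the warped image is assumed to have been produced by the reprojection step.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # local means/variances via 3x3 average pooling (a common simplification)
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)

def photometric_loss(target, warped, alpha=0.85):
    """Per-pixel photometric error: a weighted mix of structural dissimilarity and
    absolute difference between the current frame and the warped neighboring frame."""
    l1 = (target - warped).abs().mean(1, keepdim=True)
    return alpha * ssim(target, warped).mean(1, keepdim=True) + (1 - alpha) * l1

# usage: 'warped' comes from reprojecting the source frame with predicted depth and pose
loss_map = photometric_loss(torch.rand(2, 3, 192, 640), torch.rand(2, 3, 192, 640))
```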
Deep learning-based vulnerability detection models have recently been shown to be effective and, in some cases, outperform static analysis tools. However, the highest-performing approaches use token-based transformer models, which do not leverage domain knowledge. Classical program analysis techniques such as dataflow analysis can detect many types of bugs and are the most commonly used methods in practice. Motivated by the causal relationship between bugs and dataflow analysis, we present DeepDFA, a dataflow analysis-guided graph learning framework and embedding that uses program semantic features for vulnerability detection. We show that DeepDFA is performant and efficient. DeepDFA ranked first in recall, first in generalizing over unseen projects, and second in F1 among all the state-of-the-art models we experimented with. It is also the smallest model in terms of the number of parameters, and was trained in 9 minutes, 69x faster than the highest-performing baseline. DeepDFA can be used with other models. By integrating LineVul and DeepDFA, we achieved the best vulnerability detection performance of 96.4 F1 score, 98.69 precision, and 94.22 recall.
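For readers unfamiliar with the classical analysis that motivates DeepDFA, below is a minimal reaching-definitions worklist solver over a toy control-flow graph. It illustrates the kind of dataflow propagation being emulated, not DeepDFA's own graph-learning implementation; the CFG and gen/kill sets are made up for the example.

```python
def reaching_definitions(cfg, gen, kill):
    """Worklist solver. cfg: block -> list of successor blocks;
    gen/kill: block -> set of definition ids generated / killed in that block."""
    preds = {b: [p for p in cfg if b in cfg[p]] for b in cfg}
    in_sets = {b: set() for b in cfg}
    out_sets = {b: set() for b in cfg}
    work = list(cfg)
    while work:
        b = work.pop()
        in_sets[b] = set().union(*(out_sets[p] for p in preds[b])) if preds[b] else set()
        new_out = gen[b] | (in_sets[b] - kill[b])
        if new_out != out_sets[b]:
            out_sets[b] = new_out
            work.extend(cfg[b])      # successors must be re-examined
    return in_sets, out_sets

# toy CFG: entry -> loop header -> body -> header (back edge), header -> exit
cfg = {"entry": ["header"], "header": ["body", "exit"], "body": ["header"], "exit": []}
gen = {"entry": {"d1"}, "header": set(), "body": {"d2"}, "exit": set()}
kill = {"entry": {"d2"}, "header": set(), "body": {"d1"}, "exit": set()}
print(reaching_definitions(cfg, gen, kill)[0]["exit"])   # -> {'d1', 'd2'}
```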
Recent advances in text-to-image generation have witnessed the rise of diffusion models, which act as powerful generative models. Nevertheless, it is not trivial to exploit such latent-variable models to capture the dependency among discrete words while pursuing complex visual-language alignment in image captioning. In this paper, we break the deeply rooted conventions of learning a Transformer-based encoder-decoder and propose a new diffusion-model-based paradigm tailored for image captioning, namely Semantic-Conditional Diffusion Networks (SCD-Net). Technically, for each input image, we first search for semantically relevant sentences via a cross-modal retrieval model to convey comprehensive semantic information. The rich semantics are further regarded as a semantic prior that triggers the learning of a Diffusion Transformer, which produces the output sentence in a diffusion process. In SCD-Net, multiple Diffusion Transformer structures are stacked to progressively strengthen the output sentence with better visual-language alignment and linguistic coherence in a cascaded manner. Furthermore, to stabilize the diffusion process, a new self-critical sequence training strategy is designed to guide the learning of SCD-Net with the knowledge of a standard autoregressive Transformer model. Extensive experiments on the COCO dataset demonstrate the promising potential of using diffusion models in the challenging image captioning task. Source code is available at \url{https://github.com/YehLi/xmodaler/tree/master/configs/image_caption/scdnet}.
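Below is a minimal sketch of one training step of a continuous diffusion model over word embeddings, conditioned on image features and retrieved semantic words. It conveys the "semantic prior triggers a Diffusion Transformer" idea in a generic Diffusion-LM style with assumed shapes; the cascaded stacking, self-critical training, and the paper's exact parameterization are omitted, and timestep conditioning of the denoiser is dropped for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim, T = 10000, 512, 1000
embed = nn.Embedding(vocab, dim)
to_logits = nn.Linear(dim, vocab)
denoiser = nn.TransformerDecoder(nn.TransformerDecoderLayer(dim, 8, batch_first=True), 3)
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative noise schedule

def diffusion_caption_step(caption_ids, image_feats, semantic_ids):
    """caption_ids: (B, L) ground-truth tokens; image_feats: (B, N, dim) visual tokens;
    semantic_ids: (B, K) retrieved semantic words acting as the condition."""
    x0 = embed(caption_ids)                                       # clean word embeddings
    t = torch.randint(0, T, (caption_ids.size(0),))
    a = alpha_bar[t].view(-1, 1, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)    # forward noising
    cond = torch.cat([image_feats, embed(semantic_ids)], dim=1)   # visual + semantic prior
    logits = to_logits(denoiser(tgt=xt, memory=cond))             # predict clean tokens
    return F.cross_entropy(logits.reshape(-1, vocab), caption_ids.reshape(-1))

loss = diffusion_caption_step(torch.randint(0, vocab, (2, 20)),
                              torch.randn(2, 49, dim),
                              torch.randint(0, vocab, (2, 5)))
```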